# Code generation enhancement
## MiMo 7B RL 0530
Author: XiaomiMiMo · License: MIT · Tags: Large Language Model, Transformers

MiMo is a series of 7B-parameter models trained from scratch for reasoning. Through optimized pre-training and post-training strategies, it performs strongly on mathematical and code reasoning tasks.
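Since the entry advertises Transformers support, here is a minimal loading sketch in Python; the repo id `XiaomiMiMo/MiMo-7B-RL-0530` and the `trust_remote_code` flag are assumptions based on the author and model name above, not confirmed by this listing.

```python
# Minimal sketch: loading MiMo-7B-RL-0530 with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XiaomiMiMo/MiMo-7B-RL-0530"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",     # use the checkpoint's native precision
    device_map="auto",      # place layers on available devices (needs accelerate)
    trust_remote_code=True, # assumption: MiMo repos may ship custom modeling code
)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```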
## Phi 4 Reasoning Unsloth Bnb 4bit
Author: unsloth · License: MIT · Tags: Large Language Model, Transformers, Supports Multiple Languages

Phi-4-reasoning is an advanced reasoning model from Microsoft, fine-tuned from Phi-4 to strengthen reasoning in areas such as mathematics, science, and coding. This repository packages it as a bitsandbytes 4-bit (bnb-4bit) quantization.
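A sketch of loading the pre-quantized checkpoint with Transformers follows; the repo id `unsloth/Phi-4-reasoning-unsloth-bnb-4bit` is assumed from the author and model name. Pre-quantized bnb repos carry their quantization config, so no `BitsAndBytesConfig` needs to be passed, but the `bitsandbytes` package must be installed.

```python
# Minimal sketch: loading a pre-quantized bnb-4bit checkpoint with Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Phi-4-reasoning-unsloth-bnb-4bit"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # keep the 4-bit weights on GPU; requires accelerate
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```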
## Gemma 3 27b It Qat Q4 0 GGUF
Author: Mungert · Tags: Large Language Model

An experimental requantized model built from Google's Gemma-3-27b-it QAT Q4_0 quantized checkpoint, created to test performance after requantization.
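GGUF files are typically run with llama.cpp or its Python bindings; a minimal sketch with `llama-cpp-python` follows. The local file name is an assumption; check the repo for the exact artifact name.

```python
# Minimal sketch: running a GGUF quantization with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-27b-it-qat-q4_0.gguf",  # assumed local file name
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU when built with GPU support
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Summarize what QAT changes compared to post-training quantization.",
    }],
)
print(out["choices"][0]["message"]["content"])
```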
## Reasoning SCE Coder V1.0
Author: BenevolenceMessiah · Tags: Large Language Model, Transformers

A 32B-parameter large language model constructed with the SCE fusion (model-merging) method, integrating multiple high-performance pre-trained models.
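A usage sketch via the Transformers `pipeline` API, assuming the repo id `BenevolenceMessiah/Reasoning-SCE-Coder-v1.0` (a guess from the author and model name above); a 32B model is large, so `device_map="auto"` is used to spread the weights across available devices.

```python
# Minimal sketch: generating code with the merged model via the pipeline API.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="BenevolenceMessiah/Reasoning-SCE-Coder-v1.0",  # assumed repo id
    torch_dtype="auto",  # use the checkpoint's native precision
    device_map="auto",   # shard across available GPUs/CPU (needs accelerate)
)

result = generator("Implement a thread-safe LRU cache in Python.", max_new_tokens=300)
print(result[0]["generated_text"])
```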